
    Data-driven methods for real-time dynamic stability assessment and control

    Electric power systems are becoming increasingly complex to operate, a trend driven by growing demand for electricity, large-scale integration of renewable energy resources, and new system components with power electronic interfaces. In this thesis, a new real-time monitoring and control tool has been developed to support system operators and allow more efficient utilization of the transmission grid. The tool comprises four methods that address complementary tasks in power system operation: 1) preventive monitoring, 2) preventive control, 3) emergency monitoring, and 4) emergency control. The methods build on recent advances in machine learning and deep reinforcement learning to allow real-time assessment and optimized control while taking the dynamic stability of the power system into account.

    The method for preventive monitoring is intended to ensure secure operation by providing real-time estimates of a power system’s dynamic security margins. It is based on a two-step approach, where neural networks first estimate the security margin, after which the estimates are validated using a search algorithm and actual time-domain simulations. The two-step approach mitigates the inconsistency issues associated with neural networks under new or unseen operating conditions, and is shown to reduce the total computation time of the security margin by approximately 70 % for the given test system. Whenever the security margins fall below a certain threshold, a second method, aimed at preventive control, determines the optimal control actions that restore the security margins to a level above a pre-defined threshold. This method is based on deep reinforcement learning and uses a hybrid control scheme capable of simultaneously adjusting both discrete and continuous action variables. The results show that the method quickly learns an effective control policy that ensures a sufficient security margin for a range of different system scenarios.

    In case of severe disturbances, and when the preventive methods have not been sufficient to guarantee stable operation, system operators must rely on emergency monitoring and control methods. A method for emergency monitoring is therefore developed that can quickly detect the onset of instability and predict whether the present system state is stable or will evolve into an alert or emergency state in the near future. As time progresses and new events occur in the system, the network updates its assessment continuously. The results from case studies show good performance, and the network can accurately predict voltage instability in almost all test cases within only a few seconds after a disturbance. Finally, a method for emergency control is developed, based on deep reinforcement learning and aimed at mitigating long-term voltage instability in real time. Once trained, the method continuously assesses system stability and suggests fast and efficient control actions to system operators in case of voltage instability. The control is trained to use load curtailment supplied by demand response and energy storage systems as an efficient and flexible alternative for stabilizing the system. The results show that the method learns an effective control policy that stabilizes the system quickly while also minimizing the amount of required load curtailment.
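
    As a rough illustration of how the four methods could be chained in operation, the sketch below runs one assessment cycle through the preventive and emergency stages. It is only a sketch: every function, threshold, and action name is a hypothetical placeholder, not the implementation developed in the thesis.

```python
# Illustrative only: all functions, thresholds, and action names are placeholders.

MARGIN_THRESHOLD = 0.10  # assumed minimum acceptable security margin (p.u.)

def estimate_and_validate_margin(state):
    """Stand-in for preventive monitoring (NN estimate + simulation check)."""
    return 0.08  # dummy value

def preventive_control(state, margin):
    """Stand-in for the hybrid-action DRL preventive control."""
    return {"switch_shunt": 1, "redispatch_mw": 55.0}  # dummy actions

def predict_near_future_state(measurements):
    """Stand-in for the recurrent-network emergency monitoring."""
    return "alert"  # one of "stable", "alert", "emergency"

def emergency_control(state):
    """Stand-in for the DRL emergency control using demand response / storage."""
    return {"curtail_mw": 30.0, "discharge_ess_mw": 20.0}  # dummy actions

def assessment_cycle(state, measurements, disturbance_detected):
    """One pass through the preventive/emergency decision chain."""
    if not disturbance_detected:
        margin = estimate_and_validate_margin(state)          # 1) preventive monitoring
        if margin < MARGIN_THRESHOLD:
            return preventive_control(state, margin)          # 2) preventive control
        return None
    if predict_near_future_state(measurements) != "stable":   # 3) emergency monitoring
        return emergency_control(state)                       # 4) emergency control
    return None

print(assessment_cycle(state={}, measurements=[], disturbance_detected=False))
```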

    Data-driven methods for real-time voltage stability assessment

    Voltage instability is a phenomenon that limits the operation and the transmission capacity of a power system. An operating state close to the security limits enables cost-effective utilization of the system, but it can also make the system more vulnerable to disturbances. The transition towards a more sustainable energy system, with a growing share of renewable generation, will increase the complexity of voltage stability assessment and cause significant planning and operational challenges for transmission system operators. The overall aim of this thesis is to develop a real-time voltage stability assessment tool that can assist transmission system operators in monitoring voltage security limits and provide early warnings of possible voltage instability. The thesis first analyzes the difference between static and dynamic voltage security margins, both theoretically and numerically. The results of the analysis show that in power systems with a high share of loads with fast restoration dynamics, such as induction motors or power-electronic-controlled loads, conventional static methods for assessing voltage security margins may become unreliable. Methods relying on a dynamic assessment of the security margin are more reliable in these circumstances. However, dynamic assessment of voltage security margins is computationally challenging, and the margins can in most cases not be estimated within the time frame required by system operators in critical situations. To overcome this challenge, a machine learning-based method for fast and robust computation of the dynamic voltage security margin is proposed and tested in this thesis. The method, based on artificial neural networks, provides real-time estimates of voltage security margins, which are then validated using a search algorithm and actual time-domain simulations. The two-step approach is proposed to mitigate the inconsistency issues associated with neural networks under new or unseen operating conditions. Finally, a new method for voltage instability prediction is developed. The method is proposed as an online tool for system operators to predict the system’s near-future stability condition given the current operating state, using a more advanced neural network based on long short-term memory. The results from case studies using the Nordic 32 test system show good performance, and the network can accurately predict voltage instability events in almost all test cases within only a few seconds.
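
    The two-step approach described above can be illustrated with a small sketch: a placeholder stands in for the trained neural network, and a bisection search around its estimate calls a (stubbed) time-domain simulation to validate the margin. All function names and numbers are assumptions for illustration only.

```python
def nn_margin_estimate(operating_point):
    """Placeholder for a trained neural network mapping an operating point
    to an estimated loadability margin (MW)."""
    return 310.0  # dummy estimate

def time_domain_stable(operating_point, extra_load_mw):
    """Placeholder for a full time-domain simulation: returns True if the
    system survives the credible contingencies at the stressed point."""
    return extra_load_mw <= 285.0  # dummy 'true' margin of 285 MW

def validated_margin(operating_point, tol_mw=5.0, bracket_mw=50.0):
    """Two-step assessment: start a bisection search around the NN estimate
    instead of from zero, so far fewer simulations are needed."""
    guess = nn_margin_estimate(operating_point)
    lo, hi = max(0.0, guess - bracket_mw), guess + bracket_mw
    # Widen the bracket if the NN estimate was off in either direction.
    while not time_domain_stable(operating_point, lo) and lo > 0.0:
        hi, lo = lo, max(0.0, lo - bracket_mw)
    while time_domain_stable(operating_point, hi):
        lo, hi = hi, hi + bracket_mw
    # Standard bisection on the stability boundary.
    while hi - lo > tol_mw:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if time_domain_stable(operating_point, mid) else (lo, mid)
    return lo

print(validated_margin(operating_point=None))  # converges to ~285 MW
```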

    Deep Reinforcement Learning for Long-Term Voltage Stability Control

    Deep reinforcement learning (DRL) is a machine learning-based method suited for complex and high-dimensional control problems. In this study, a real-time control system based on DRL is developed for long-term voltage stability events. The possibility of using system services from demand response (DR) and energy storage systems (ESS) as control measures to stabilize the system is investigated. The performance of the DRL control is evaluated on a modified Nordic32 test system. The results show that the DRL control quickly learns an effective control policy that can handle the uncertainty involved when using DR and ESS. The DRL control is compared to a rule-based load shedding scheme and is shown to stabilize the system both significantly faster and with less load curtailment. Finally, tests on load and disturbance scenarios that were not included in the training data demonstrate the robustness and generalization capability of the control.
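
    The abstract does not state the reward design, but a plausible shaping for this kind of DRL control is sketched below: low voltages are penalized heavily while the use of demand response and storage is penalized lightly, steering the agent towards stabilizing the system with minimal curtailment. All weights and limits are assumed values, not those of the paper.

```python
def emergency_reward(bus_voltages_pu, curtailed_mw, ess_discharge_mw,
                     v_min=0.9, curtail_weight=0.05, ess_weight=0.01):
    """Illustrative reward: heavy penalty on voltage violations, light penalty
    on DR curtailment and ESS discharge (all coefficients are assumptions)."""
    voltage_violation = sum(max(0.0, v_min - v) for v in bus_voltages_pu)
    return -(10.0 * voltage_violation
             + curtail_weight * curtailed_mw
             + ess_weight * ess_discharge_mw)

# Example: two buses below 0.9 p.u., 40 MW curtailed, 20 MW supplied from storage.
print(emergency_reward([0.87, 0.92, 0.88], curtailed_mw=40.0, ess_discharge_mw=20.0))
```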

    Fast dynamic voltage security margin estimation: concept and development

    This study develops a machine learning-based method for fast estimation of the dynamic voltage security margin (DVSM). The DVSM can incorporate the dynamic system response following a disturbance, and it generally provides a better measure of security than the more commonly used static voltage security margin (VSM). Using the concept of transient P-V curves, this study first establishes and visualises the circumstances under which the DVSM is to be preferred over the static VSM. To overcome the computational difficulties in estimating the DVSM, this study proposes a method based on training two separate neural networks on a data set composed of combinations of different operating conditions and contingency scenarios generated using time-domain simulations. The trained neural networks are used to improve the search algorithm and significantly increase the computational efficiency in estimating the DVSM. The machine learning-based approach is thus applied to support the estimation of the DVSM, while the actual margin is validated using time-domain simulations. The proposed method was tested on the Nordic32 test system, and the number of time-domain simulations could be reduced by ∼70%, allowing system operators to perform the estimations in near real-time.
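
    The training data described above, combinations of operating conditions and contingencies labelled by time-domain simulations, could be generated along the lines of the sketch below; the simulation call and the feature encoding are placeholders, not the study's actual setup.

```python
import itertools
import random

def run_time_domain_simulation(operating_condition, contingency):
    """Placeholder for the actual time-domain simulation; returns the
    dynamic voltage security margin (MW) for this scenario."""
    return random.uniform(0.0, 500.0)  # dummy value

def build_training_set(operating_conditions, contingencies):
    """Generate (features, label) pairs by simulating every combination of
    operating condition and contingency scenario."""
    samples = []
    for oc, ctg in itertools.product(operating_conditions, contingencies):
        margin_mw = run_time_domain_simulation(oc, ctg)
        features = list(oc) + [ctg]          # assumed feature encoding
        samples.append((features, margin_mw))
    return samples

# Toy example: three load levels x two contingencies -> six labeled samples.
ocs = [(0.8, 0.9), (1.0, 1.0), (1.1, 1.05)]   # assumed (load factor, voltage set-point)
ctgs = [0, 1]                                  # assumed contingency indices
print(len(build_training_set(ocs, ctgs)))
```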

    Real-time security margin control using deep reinforcement learning

    This paper develops a real-time control method based on deep reinforcement learning aimed at determining the optimal control actions needed to maintain a sufficient secure operating limit. The secure operating limit refers to the most stressed pre-contingency operating point of an electric power system that can still withstand a set of credible contingencies without violating stability criteria. The developed deep reinforcement learning method uses a hybrid control scheme that is capable of simultaneously adjusting both discrete and continuous action variables. The performance is evaluated on a modified version of the Nordic32 test system. The results show that the developed deep reinforcement learning method quickly learns an effective control policy to ensure a sufficient secure operating limit for a range of different system scenarios. The performance is also compared to a control based on a rule-based look-up table and a deep reinforcement learning control adapted for discrete action spaces. The hybrid deep reinforcement learning control achieved significantly better performance on all of the defined test sets, indicating that the ability to adjust both discrete and continuous action variables resulted in a more flexible and efficient control policy.
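
    A hybrid action space of this kind is commonly realized with a shared network body feeding one categorical head for the discrete actions and one Gaussian head for the continuous actions. The sketch below shows that structure; the layer sizes, action counts, and interpretation of the actions are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical, Normal

class HybridPolicy(nn.Module):
    """Illustrative policy with a categorical (discrete) branch and a
    Gaussian (continuous) branch; all sizes are assumed values."""

    def __init__(self, state_dim=20, n_discrete=4, n_continuous=2, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.discrete_logits = nn.Linear(hidden, n_discrete)    # e.g. shunt switching
        self.continuous_mean = nn.Linear(hidden, n_continuous)  # e.g. re-dispatch setpoints
        self.log_std = nn.Parameter(torch.zeros(n_continuous))

    def forward(self, state):
        h = self.body(state)
        disc = Categorical(logits=self.discrete_logits(h))
        cont = Normal(self.continuous_mean(h), self.log_std.exp())
        return disc, cont

policy = HybridPolicy()
disc_dist, cont_dist = policy(torch.randn(1, 20))
action = (disc_dist.sample(), cont_dist.sample())
log_prob = disc_dist.log_prob(action[0]) + cont_dist.log_prob(action[1]).sum(-1)
print(action, log_prob)
```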

    Real-time Security Margin Control Using Deep Reinforcement Learning

    This paper develops a real-time control method based on deep reinforcement learning (DRL) aimed at determining the optimal control actions needed to maintain a sufficient secure operating limit (SOL). The SOL refers to the most stressed pre-contingency operating point of an electric power system that can still withstand a set of credible contingencies without violating stability criteria. The developed DRL method uses a hybrid control scheme that is capable of simultaneously adjusting both discrete and continuous action variables. The performance is evaluated on a modified version of the Nordic32 test system. The results show that the developed DRL method quickly learns an effective control policy to ensure a sufficient SOL for a range of different system scenarios. The impact of measurement errors and unseen system conditions is also evaluated. While the DRL method achieves good performance on a majority of the defined test scenarios, the results indicate that including measurement errors during the training phase would improve the robustness of the control with respect to random errors in the state signal. The performance is also compared to a conventional look-up table control, where the advantages of the DRL method are highlighted.
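
    Including measurement errors during training can be as simple as perturbing each observed signal with proportional Gaussian noise before it is passed to the agent; the sketch below assumes a 1 % relative standard deviation, which is an illustrative value only.

```python
import numpy as np

_rng = np.random.default_rng(0)

def noisy_observation(state, rel_std=0.01):
    """Illustrative measurement-error model: perturb each signal with
    zero-mean Gaussian noise proportional to its magnitude (assumed 1 % std)."""
    state = np.asarray(state, dtype=float)
    return state * (1.0 + rel_std * _rng.standard_normal(state.shape))

# e.g. bus voltages (p.u.) and line flows (MW) as seen by the agent during training
clean_state = np.array([1.02, 0.98, 350.0, 120.0])
print(noisy_observation(clean_state))
```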

    Impact of static and dynamic load models on security margin estimation methods

    The post-contingency loadability limit (PCLL) and the secure operating limit (SOL) are the two main approaches used when computing the security margins of an electric power system. While the SOL is significantly more computationally demanding than the PCLL, it can account for the dynamic response after a disturbance and generally provides a better measure of the security margin. In this study, the difference between these two methods is analyzed for a range of different contingency and load model scenarios. A methodology that allows a fair comparison between the two security margins is developed and tested on a modified version of the Nordic32 test system. The study shows that the SOL can differ significantly from the PCLL, especially when the system has a high penetration of loads with constant power characteristics or a large share of induction motor loads with fast load restoration. The difference between the methods is also tested for different contingencies, where longer fault clearing times are shown to significantly increase the difference between the two margins.
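
    The role of the load characteristic can be seen from the standard static exponential load model, where the exponent decides how much power a load draws at a depressed post-contingency voltage. The sketch below is a toy illustration of that point, not the load models used in the study.

```python
def exponential_load(p0_mw, v_pu, alpha):
    """Static exponential load model: P = P0 * V**alpha.
    alpha = 0 -> constant power, 1 -> constant current, 2 -> constant impedance."""
    return p0_mw * v_pu ** alpha

# Power drawn at a depressed post-contingency voltage of 0.9 p.u.
for alpha in (0, 1, 2):
    print(alpha, exponential_load(100.0, 0.9, alpha))
# 0 -> 100.0 MW, 1 -> 90.0 MW, 2 -> 81.0 MW: constant-power loads keep the full
# demand on a weakened grid, which stresses the post-contingency system the most.
```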

    Voltage Instability Prediction Using a Deep Recurrent Neural Network

    This paper develops a new method for voltage instability prediction using a recurrent neural network with long short-term memory. The method is intended as a supplementary warning system for system operators, capable of assessing whether the current state will cause voltage instability issues several minutes into the future. The proposed method uses a long sequence-based network, where both real-time and historic data are used to enhance the classification accuracy. The network is trained and tested on the Nordic32 test system, where combinations of different operating conditions and contingency scenarios are generated using time-domain simulations. The results show that almost all N-1 contingency test cases were predicted correctly, and N-1-1 contingency test cases were predicted with over 95 % accuracy only seconds after a disturbance. Further, the impact of sequence length is examined, showing that the proposed long sequence-based method provides significantly better classification accuracy than both a feedforward neural network and a network using a shorter sequence.
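
    A long short-term memory classifier of the kind described can be sketched as below: a recurrent layer reads the measurement sequence and a linear head classifies the final hidden state as stable or unstable. The feature count, hidden size, and sequence length are assumed values, not those of the paper.

```python
import torch
import torch.nn as nn

class InstabilityPredictor(nn.Module):
    """Illustrative LSTM classifier over a sequence of system measurements;
    layer sizes and feature count are assumed, not those of the paper."""

    def __init__(self, n_features=30, hidden=128, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)   # stable vs. unstable

    def forward(self, x):                # x: (batch, time_steps, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # classify from the last time step

model = InstabilityPredictor()
sequence = torch.randn(8, 60, 30)        # e.g. 8 cases, 60 time samples, 30 signals
logits = model(sequence)
print(logits.shape)                       # torch.Size([8, 2])
```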